Docker Model Runner

Requires: Docker Engine, or Docker Desktop 4.41+ (Windows), or Docker Desktop 4.40+ (macOS)
For: See Requirements section below

Docker Model Runner (DMR) makes it easy to manage, run, and deploy AI models using Docker. Designed for developers, Docker Model Runner streamlines the process of pulling, running, and serving large language models (LLMs) and other AI models directly from Docker Hub or any OCI-compliant registry.

With seamless integration into Docker Desktop and Docker Engine, you can serve models via OpenAI-compatible APIs, package GGUF files as OCI Artifacts, and interact with models from both the command line and graphical interface.
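For example, once a model is available locally you can call the OpenAI-compatible endpoint with any HTTP client. The sketch below assumes host-side TCP access is enabled on the default port 12434, and uses ai/smollm2 purely as a placeholder model name; your endpoint, port, and model may differ:

$ curl http://localhost:12434/engines/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
          "model": "ai/smollm2",
          "messages": [{"role": "user", "content": "Write a one-line summary of Docker."}]
        }'

Because the response follows the standard OpenAI chat completions format, existing OpenAI client libraries can be pointed at this base URL without code changes.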

Whether you're building generative AI applications, experimenting with machine learning workflows, or integrating AI into your software development lifecycle, Docker Model Runner provides a consistent, secure, and efficient way to work with AI models locally.

Key features

  • Pull and push models to and from Docker Hub
  • Serve models on OpenAI-compatible APIs for easy integration with existing apps
  • Support for both llama.cpp and vLLM inference engines (vLLM currently supported on Linux x86_64/amd64 with NVIDIA GPUs only)
  • Package GGUF and Safetensors files as OCI Artifacts and publish them to any Container Registry
  • Run and interact with AI models directly from the command line or from the Docker Desktop GUI
  • Manage local models and display logs
  • Display prompt and response details
  • Conversational context support for multi-turn interactions

Requirements

Docker Model Runner is supported on the following platforms:

Windows (amd64):

  • NVIDIA GPUs
  • NVIDIA drivers 576.57+

Windows (arm64):

  • OpenCL for Adreno
  • Qualcomm Adreno GPU (6xx series and later)

    Note

    Some llama.cpp features might not be fully supported on the 6xx series.

macOS:

  • Apple Silicon

Docker Engine only:

  • Supports CPU, NVIDIA (CUDA), AMD (ROCm), and Vulkan backends
  • Requires NVIDIA driver 575.57.08+ when using NVIDIA GPUs
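On Docker Engine, Model Runner is distributed as a separate plugin package. As a rough sketch for Debian/Ubuntu-based systems (the package name and availability depend on your distribution and on Docker's apt repository being configured):

# Install the Docker Model Runner CLI plugin alongside Docker Engine
$ sudo apt-get update
$ sudo apt-get install docker-model-plugin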

How Docker Model Runner works

Models are pulled from Docker Hub the first time you use them and are stored locally. They load into memory only at runtime when a request is made, and unload when not in use to optimize resources. Because models can be large, the initial pull may take some time. After that, they're cached locally for faster access. You can interact with the model using OpenAI-compatible APIs.
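A typical first session looks like the following sketch; ai/smollm2 is a placeholder, and any model published under Docker Hub's ai namespace or on another OCI-compliant registry works the same way:

# Pull the model from Docker Hub and cache it locally
$ docker model pull ai/smollm2

# Send a single prompt; the model is loaded into memory on demand
$ docker model run ai/smollm2 "Explain OCI artifacts in one sentence."

# List the models cached on this machine
$ docker model list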

Docker Model Runner supports both llama.cpp and vLLM as inference engines, providing flexibility for different model formats and performance requirements. For more details, see the Docker Model Runner repository.

Tip

Using Testcontainers or Docker Compose? Testcontainers for Java and Go, and Docker Compose now support Docker Model Runner.

Known issues

docker model is not recognized

If you run a Docker Model Runner command and see:

docker: 'model' is not a docker command

It means Docker can't find the plugin because it's not in the expected CLI plugins directory.

To fix this, create a symlink so Docker can detect it (the path below assumes Docker Desktop on macOS):

$ ln -s /Applications/Docker.app/Contents/Resources/cli-plugins/docker-model ~/.docker/cli-plugins/docker-model

Once linked, rerun the command.
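To confirm the plugin is now detected, run any docker model subcommand, for example:

$ docker model list

If the symlink is correct, Docker resolves the model subcommand instead of printing the error above.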

Share feedback

Thanks for trying out Docker Model Runner. To report bugs or request features, open an issue on GitHub. You can also give feedback through the Give feedback link next to the Enable Docker Model Runner setting.

Next steps

Get started with DMR